Multiple choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an expensive and time-consuming task. A particularly sensitive aspect of MCQ creation is devising relevant distractors, i.e., wrong answers that are not easily identifiable as being wrong. This paper studies how a large existing set of manually created answers and distractors for questions over a variety of domains, subjects, and languages can be leveraged to help teachers create new MCQs, through the smart reuse of existing distractors. We built several data-driven models based on context-aware question and distractor representations, and compared them with static feature-based models. The proposed models are evaluated with automated metrics and in a realistic user test with teachers. Both automatic and human evaluations indicate that the context-aware models consistently outperform the static feature-based approach. For our best-performing context-aware model, on average 3 of the 10 distractors shown to teachers were rated as high quality. We create a performance benchmark, and make it public, to enable comparison between approaches and to introduce a more standardized evaluation of the task. The benchmark contains a test set of 298 educational questions covering multiple subjects and languages, and a 77k-item multilingual distractor vocabulary for future research.
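As a rough illustration of the retrieval idea behind such reuse, the sketch below ranks candidate distractors from an existing pool by their context-aware similarity to a new question and its correct answer; the encoder, the query format, and the scoring are illustrative assumptions rather than the paper's exact models.

```python
# Hedged sketch: rank distractors from an existing pool for a new question by
# context-aware similarity. The encoder choice and scoring are assumptions,
# not the paper's architecture.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed encoder

def rank_distractors(question: str, answer: str, pool: list[str], top_k: int = 10):
    """Return the top_k pool entries most compatible with (question, answer)."""
    query_vec = model.encode([f"{question} [SEP] {answer}"])[0]
    pool_vecs = model.encode(pool)
    # Cosine similarity between the question+answer context and each candidate.
    sims = pool_vecs @ query_vec / (
        np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    order = np.argsort(-sims)[:top_k]
    return [(pool[i], float(sims[i])) for i in order]
```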
Skills play a central role in the labor market and in many human resources (HR) processes. In line with other digital experiences, candidates on today's online job market expect to be shown the right opportunities based on their skills. Likewise, enterprises increasingly need to use data to ensure that the skills in their workforce remain future-proof. However, structured information about skills is often missing, and processes based on self- or manager-assessment have shown problems with the adoption, completeness, and freshness of the resulting data. Skill extraction is a difficult task, given that thousands of possible skill labels are mentioned either explicitly or only implicitly, and given the lack of finely annotated training corpora. Previous work on skill extraction has oversimplified the task by casting it as an explicit entity detection task, or has relied on manually annotated training data, which is infeasible when applied to a complete skill vocabulary. We propose an end-to-end system for skill extraction based on distant supervision through literal matching. We propose and evaluate several negative sampling strategies, selected on a validation set, to improve the generalization of skill extraction to implicitly mentioned skills, despite the lack of such implicit skills in the distantly supervised data. We observe that selecting negative examples from related skills using the ESCO taxonomy yields the largest improvement, and that combining three different strategies in one model further improves performance, by up to 8 percentage points in RP@5. We introduce a manually annotated evaluation benchmark for skill extraction based on the ESCO taxonomy, on which we validate our models. We release the benchmark dataset for research purposes to stimulate further research on the task.
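A minimal sketch of one of the negative-sampling ideas described above, assuming a simple parent/child view of the taxonomy: for each distantly supervised positive, hard negatives are drawn from skills that are related (here, siblings) in the taxonomy.

```python
# Hedged sketch: expand distantly supervised (sentence, skill) positives with
# taxonomy-based hard negatives. The parent/child dictionaries and the sampling
# ratio are illustrative assumptions.
import random

def related_negatives(skill: str, parent_of: dict[str, str],
                      children_of: dict[str, list[str]], k: int = 3) -> list[str]:
    """Sample up to k sibling skills of `skill` as hard negatives."""
    parent = parent_of.get(skill)
    siblings = [s for s in children_of.get(parent, []) if s != skill]
    return random.sample(siblings, min(k, len(siblings)))

def build_training_pairs(positives: list[tuple[str, str]], parent_of, children_of):
    """Turn positives into labeled (sentence, skill, label) training examples."""
    examples = []
    for sentence, skill in positives:
        examples.append((sentence, skill, 1))                 # positive
        for neg in related_negatives(skill, parent_of, children_of):
            examples.append((sentence, neg, 0))               # related-skill hard negative
    return examples
```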
This work presents a new dialogue dataset, CookDial, that facilitates research on task-oriented dialogue systems with procedural task knowledge understanding. The corpus contains 260 human-to-human task-oriented dialogues in which an agent, given a recipe document, guides the user to cook a dish. Dialogues in CookDial exhibit two unique features: (i) procedural alignment between the dialogue flow and the supporting document; (ii) complex agent decision-making that involves segmenting long sentences, paraphrasing hard instructions, and resolving coreference in the dialogue context. In addition, we identify three challenging (sub)tasks in the assumed task-oriented dialogue system: (1) user question understanding, (2) agent action frame prediction, and (3) agent response generation. For each of these tasks, we develop a neural baseline model, which we evaluate on the CookDial dataset. We publicly release the CookDial dataset, comprising rich annotations of both the dialogues and the recipe documents, to stimulate further research on domain-specific document-grounded dialogue systems.
Soft actuators offer a safe and adaptable approach to tasks such as gentle grasping and dexterous manipulation. However, creating accurate models to control such systems is challenging because of the complex physics of deformable materials. Accurate finite element method (FEM) models carry a computational complexity that is prohibitive for closed-loop use. Differentiable simulators are an attractive alternative, but their applicability to soft actuators and deformable materials remains underexplored. This paper presents a framework that combines the advantages of both. We learn a differentiable model composed of a material-properties neural network and an analytical dynamics model of the remainder of the manipulation task. This physics-informed model is trained on data generated with FEM and can be used for closed-loop control and inference. We evaluate our framework on a dielectric elastomer actuator (DEA) coin-pulling task. We simulate, with FEM, the task of using a DEA to pull a coin along a surface with frictional contact, and we evaluate the physics-informed model for simulation, control, and inference. Compared to FEM, our model achieves a simulation error below 5%, and we use it as the basis for an MPC controller that requires fewer iterations than model-free actor-critic, PD, and heuristic policies.
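A minimal sketch of the hybrid modeling idea: a small neural network stands in for the hard-to-model material response, while the coin's motion follows an analytical, differentiable dynamics step with friction. All constants, shapes, and the smooth friction approximation are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: neural network for the material/actuation force, analytical and
# differentiable dynamics for the sliding coin. Values are illustrative assumptions.
import torch
import torch.nn as nn

class MaterialNet(nn.Module):
    """Maps (voltage, coin position, coin velocity) to the actuator force on the coin."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, voltage, pos, vel):
        return self.net(torch.stack([voltage, pos, vel], dim=-1)).squeeze(-1)

def dynamics_step(material_net, voltage, pos, vel, dt=1e-3, mass=0.005, mu=0.3, g=9.81):
    """One differentiable semi-implicit Euler step of the coin on the surface."""
    f_act = material_net(voltage, pos, vel)
    f_friction = -mu * mass * g * torch.tanh(vel / 1e-3)  # smoothed Coulomb friction
    acc = (f_act + f_friction) / mass
    vel_next = vel + dt * acc
    pos_next = pos + dt * vel_next
    return pos_next, vel_next
```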
This paper presents a data-driven modeling approach for developing control-oriented thermal models of buildings. These models are developed with the objective of reducing energy-consumption costs while keeping the indoor temperature of the building within desired comfort limits. Combining the interpretability of white/gray-box physics models with the expressive power of neural networks, we propose a physics-informed neural network approach for this modeling task. In addition to measured data and building parameters, we encode into the neural networks the underlying physics that governs the thermal behavior of these buildings. The result is a model guided by physics that helps to model the temporal evolution of room temperature and power consumption, as well as of a hidden state, namely the temperature of the building's thermal mass. The main research contributions of this work are: (1) we propose two variants of physics-informed neural network architectures for the task of control-oriented thermal modeling of buildings, (2) we show that these architectures are data-efficient, requiring less training data than conventional, non-physics-informed neural networks, and (3) we show that these architectures achieve more accurate predictions than conventional neural networks over longer prediction horizons. We test the predictive performance of the proposed architectures on simulated and real-world data to demonstrate (2) and (3), and show that the proposed physics-informed neural network architectures can be used for this control-oriented modeling problem.
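As a rough illustration of encoding the governing physics, the sketch below embeds a lumped RC (resistance-capacitance) update for the room and thermal-mass temperatures, with a small neural network supplying unmodeled heat gains; the parameter values and the network are illustrative assumptions rather than the paper's architectures.

```python
# Hedged sketch: physics-guided thermal update. A two-state RC model drives the
# temperature dynamics; a small network learns residual heat gains. All parameters
# are illustrative assumptions.
import torch
import torch.nn as nn

class GainNet(nn.Module):
    """Learns unmodeled heat gains (e.g., occupancy, solar) from exogenous inputs."""
    def __init__(self, n_inputs: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_inputs, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, exogenous):
        return self.net(exogenous).squeeze(-1)

def rc_step(T_r, T_m, T_out, heat_power, exogenous, gain_net,
            dt=900.0, C_r=1e6, C_m=1e7, R_out=0.01, R_m=0.005):
    """One physics-guided Euler step for room (T_r) and thermal-mass (T_m) temperatures."""
    q_gain = gain_net(exogenous)
    dT_r = ((T_out - T_r) / R_out + (T_m - T_r) / R_m + heat_power + q_gain) / C_r
    dT_m = ((T_r - T_m) / R_m) / C_m
    return T_r + dt * dT_r, T_m + dt * dT_m
```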
We consider the task of document-level entity linking (EL), in which it is important to make consistent decisions for entity mentions jointly over the entire document. We aim to exploit explicit 'connections' between mentions within the document itself: we propose to join the EL task with coreference resolution (coref). This is complementary to related work that exploits either (i) implicit document information (e.g., latent relations between entity mentions, or general language models) or (ii) connections between candidate links (e.g., as inferred from an external knowledge base). Specifically, we cluster mentions that are linked through coreference and enforce a single EL for all mentions in a cluster. This constraint has the added benefit of increased coverage, obtained by joining the EL candidate lists of the clustered mentions. We formulate the coref+EL problem as a structured prediction task over directed trees and solve it with a globally normalized model. Experimental results on two datasets show a boost of up to +5% F1-score on both the coref and EL tasks compared to their standalone counterparts. For a subset of hard cases, in which individual mentions lack the correct entity in their candidate list, we obtain an accuracy increase of 50%.
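A simplified, greedy illustration of the cluster-level constraint (the paper instead uses a globally normalized directed-tree model): mentions in a coreference cluster share one entity link, chosen from the union of their candidate lists.

```python
# Hedged sketch: one consistent entity link per coreference cluster, chosen from the
# union of the clustered mentions' candidate lists. Greedy scoring is an assumption,
# not the paper's structured model.
from collections import defaultdict

def link_clusters(clusters: list[list[str]],
                  candidates: dict[str, list[tuple[str, float]]]) -> dict[str, str]:
    """clusters: lists of mention ids; candidates: mention id -> [(entity, score), ...]."""
    assignment = {}
    for cluster in clusters:
        # Union the candidate lists of all mentions in the cluster (coverage benefit).
        totals = defaultdict(float)
        for mention in cluster:
            for entity, score in candidates.get(mention, []):
                totals[entity] += score
        if not totals:
            continue
        best_entity = max(totals, key=totals.get)
        for mention in cluster:
            assignment[mention] = best_entity  # one consistent link per cluster
    return assignment
```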
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by fine-tuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that, by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task, and even more so for drum grooves, which have little precedent in the literature. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
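One plausible way to turn drum-performance MIDI into text that a pre-trained language model can be fine-tuned on is sketched below; the token format is an assumption for illustration, not necessarily the encoding used in this work.

```python
# Hedged sketch: serialize a drum MIDI file into a token string for language-model
# fine-tuning. The WAIT_/DRUM_ token scheme is an illustrative assumption.
import mido

def midi_to_text(path: str, time_step: float = 0.01) -> str:
    """Encode note-on events as 'WAIT_n DRUM_pitch_velocitybucket' tokens."""
    tokens = []
    for msg in mido.MidiFile(path):           # iteration yields messages with delta time in seconds
        if msg.time > 0:
            tokens.append(f"WAIT_{round(msg.time / time_step)}")
        if msg.type == "note_on" and msg.velocity > 0:
            tokens.append(f"DRUM_{msg.note}_{msg.velocity // 16}")
    return " ".join(tokens)

# Each encoded groove then becomes one training example, e.g. {"prompt": "", "completion": text},
# in whatever format the chosen language model's fine-tuning pipeline expects.
```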
Many popular policy gradient methods for reinforcement learning follow a biased approximation of the policy gradient known as the discounted approximation. While it has been shown that the discounted approximation of the policy gradient is not the gradient of any objective function, little else is known about its convergence behavior or properties. In this paper, we show that if the discounted approximation is followed such that the discount factor is increased slowly at a rate related to a decreasing learning rate, the resulting method recovers the standard guarantees of gradient ascent on the undiscounted objective.
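For concreteness, a sketch of the objects involved, in standard notation that is assumed here rather than taken from the paper:

```latex
% Undiscounted objective and the commonly followed discounted approximation (notation assumed).
J(\theta) = \mathbb{E}_{\pi_\theta}\!\Big[\sum_{t=0}^{\infty} r_t\Big],
\qquad
\hat{g}_\gamma(\theta) = \mathbb{E}_{\pi_\theta}\!\Big[\sum_{t=0}^{\infty}
    \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, Q^{\pi_\theta}_{\gamma}(s_t, a_t)\Big].
```

The gradient of the discounted objective would carry an extra factor of gamma to the power t inside the sum; dropping that factor is what makes the approximation biased. The schedule described above couples an increasing discount with a decreasing learning rate, for instance (purely as an illustration) setting the discount at step k to one minus the learning rate at step k, so that following the approximation eventually behaves like gradient ascent on the undiscounted objective.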
Traditionally, data analysis and theory have been viewed as separate disciplines, each feeding into fundamentally different types of models. Modern deep learning technology is beginning to unify these two disciplines and will produce a new class of predictively powerful space weather models that combine the physical insights gained from data and theory. We call on NASA to invest in the research and infrastructure necessary for the heliophysics community to take advantage of these advances.
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard for evaluating model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question-answering datasets spanning professional medical exams, research, and consumer queries, and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes, including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion-parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical Licensing Exam questions), surpassing the prior state of the art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
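A minimal sketch of the general mechanism behind prompt tuning, of which instruction prompt tuning is a variant: a handful of learnable soft-prompt vectors are prepended to the embedded input while the language model's weights stay frozen. The wrapper interface (an LM accepting inputs_embeds, as in Hugging Face transformers) and all dimensions are illustrative assumptions.

```python
# Hedged sketch: only the soft-prompt vectors are trained; the LLM stays frozen.
# Shapes, the prompt length, and the lm(inputs_embeds=...) interface are assumptions.
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    def __init__(self, frozen_lm: nn.Module, embed: nn.Embedding, prompt_len: int = 20):
        super().__init__()
        self.lm, self.embed = frozen_lm, embed
        for p in self.lm.parameters():
            p.requires_grad = False                       # language model stays frozen
        d_model = embed.embedding_dim
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, input_ids: torch.Tensor):
        token_embeds = self.embed(input_ids)              # (batch, seq, d_model)
        prompt = self.soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
        # Assumes an LM that accepts precomputed input embeddings (e.g., HF-style models);
        # only self.soft_prompt receives gradients during fine-tuning.
        return self.lm(inputs_embeds=inputs_embeds)
```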